2,703 research outputs found

    Ozone: Efficient Execution with Zero Timing Leakage for Modern Microarchitectures

    Time variation during program execution can leak sensitive information. Time variations due to program control flow and hardware resource contention have been used to steal encryption keys in cipher implementations such as AES and RSA. A number of approaches to mitigate timing-based side-channel attacks have been proposed, including cache partitioning, control-flow obfuscation, and injecting timing noise into the outputs of code. While these techniques make timing-based side-channel attacks more difficult, they do not eliminate the risks. Prior techniques are either too specific or too expensive, and all leave remnants of the original timing side channel for later attackers to attempt to exploit. In this work, we show that the state-of-the-art techniques in timing side-channel protection, which limit timing leakage but do not eliminate it, still have significant vulnerabilities to timing-based side-channel attacks. To provide a means for total protection from timing-based side-channel attacks, we develop Ozone, the first zero timing leakage execution resource for a modern microarchitecture. Code in Ozone executes under a special hardware thread that gains exclusive access to a single core's resources for a fixed (and limited) number of cycles, during which it cannot be interrupted. Memory access under Ozone thread execution is limited to a fixed-size, uncached scratchpad memory, and all Ozone threads begin execution with a known, fixed microarchitectural state. We evaluate Ozone using a number of security-sensitive kernels that have previously been targets of timing side-channel attacks, and show that Ozone eliminates timing leakage with minimal performance overhead
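
    A minimal sketch of the attack model (illustrative only, not code from the paper): the timing leak comes from secret-dependent control flow, and the usual software workaround is to write the operation so its running time does not depend on the secret; Ozone's goal is to give such code a hardware execution resource with no residual timing variation. A hypothetical byte-string comparison:

    def leaky_compare(secret: bytes, guess: bytes) -> bool:
        # Early exit: running time depends on how many leading bytes match,
        # which an attacker can measure to recover the secret byte by byte.
        if len(secret) != len(guess):
            return False
        for s, g in zip(secret, guess):
            if s != g:
                return False
        return True

    def constant_time_compare(secret: bytes, guess: bytes) -> bool:
        # Accumulate differences with XOR so the loop always runs to the end;
        # the time taken no longer depends on where the first mismatch occurs.
        if len(secret) != len(guess):
            return False
        diff = 0
        for s, g in zip(secret, guess):
            diff |= s ^ g
        return diff == 0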

    Exploiting implicit parallelism in SPARC instruction execution

    One way to increase the performance of a processing unit is to exploit implicit parallelism. Exploiting this parallelism requires a processor to dynamically select instructions in a serial instruction stream that can be executed in parallel. As operations are computed concurrently, an execution speedup will occur. This thesis studies how effectively implicit parallelism could be exploited in the Scalable Processor Architecture (SPARC) [9], a reduced instruction set architecture developed by Sun Microsystems. First, an analysis of SPARC instruction traces will determine the optimal speedup that would be realized by a processor with infinite resources. Next, an analytical model of a parallelizing processor will be developed and used to predict the effects of limited resources on optimal speedup. Lastly, a SPARC simulator will be employed to determine the actual speedup of resource-limited configurations, and the results will be correlated with the analytical model
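
    As a rough illustration of the trace analysis described above (a sketch under simplified assumptions — unit latency, register dependences only, infinite resources — not the thesis's tooling), the ideal speedup of a serial trace can be estimated from the length of its longest dependence chain:

    def ideal_speedup(trace):
        # trace: list of (dest_regs, src_regs) tuples, one per dynamic
        # instruction, each a set of register names; every op takes one cycle.
        ready = {}    # register name -> cycle its value becomes available
        depth = 0     # longest dependence chain seen so far
        for dests, srcs in trace:
            cycle = 1 + max((ready.get(r, 0) for r in srcs), default=0)
            for r in dests:
                ready[r] = cycle
            depth = max(depth, cycle)
        return len(trace) / depth if depth else 1.0

    # Example: r3 = r1 + r2; r4 = r3 + r1; r5 = r6 + r7 gives three
    # instructions over a two-cycle critical path, an ideal speedup of 1.5x.
    print(ideal_speedup([({"r3"}, {"r1", "r2"}),
                         ({"r4"}, {"r3", "r1"}),
                         ({"r5"}, {"r6", "r7"})]))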

    Engineering Fluorescent Nanodiamonds

    We have developed a technique to grow fluorescent nanodiamonds under high-pressure and high-temperature conditions by using diamondoid seed molecules and decomposing hydrocarbons into reactive carbon species, such as radicals, to grow nanodiamonds onto these seeds. Furthermore, by using specially designed derivatives of diamondoids as seeds, the process should allow the near-deterministic creation of fluorescent color centers inside the grown nanodiamond. In addition, due to the relatively low growth temperatures, we can grow the diamonds slowly so as to produce nanodiamonds of exceptionally high quality. Such a technology impacts fields such as single-spin imaging, bio-labeling, and quantum computing. In the future, this technology could be used to engineer single quantum systems, opening the door to a new era of sensing technology on scales smaller than ever before

    A Comparison of Two Scaling Techniques to Reduce Uncertainty in Predictive Models

    This research examines the use of two scaling techniques to accurately transfer information from small-scale data to large-scale predictions in a handful of nonlinear functions. The two techniques are (1) using random draws from distributions that represent smaller time scales and (2) using a single draw from a distribution representing the mean over all time represented by the model. This research used simulation to create the underlying distributions for the variable and parameters of the chosen functions, which were then scaled accordingly. Once scaled, the variable and parameters were substituted into the chosen functions to give an output value. Using simulation, output distributions were created for each combination of scaling technique, underlying distribution, variable bounds, and parameter bounds. These distributions were then compared using a variety of statistical tests, measures, and graphical plots
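
    The difference between the two techniques can be illustrated with a short simulation (a hedged sketch, assuming a convex test function f(x) = x**2 and a lognormal fine-scale variable, neither of which is taken from the study): pushing many fine-scale draws through the nonlinearity and then averaging generally does not equal pushing a single mean-level value through it, which is Jensen's inequality.

    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: x ** 2

    # Technique 1: draw at the fine time scale, apply f, then aggregate.
    fine_draws = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
    technique1 = f(fine_draws).mean()

    # Technique 2: collapse the fine scale to its mean first, then apply f once.
    technique2 = f(fine_draws.mean())

    print(f"mean of f(x): {technique1:.3f}   f(mean of x): {technique2:.3f}")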

    Bethlehem Steel Ruins


    Maryland, the Marine Hospital Service, and the Medical Relief of Chesapeake Oyster Dredgers, 1870-1900

    This thesis will challenge the notion that the federal government took a hands-off approach to industrial health during the Gilded Age by examining the stances taken by the Maryland government and the federal Marine Hospital Service (MHS) toward the oyster dredgers of the Chesapeake Bay. It will highlight the important role played by newly professionalized bureaucracies in developing public policy through its examination of the creation of the MHS Relief Station at Solomons Island in Southern Maryland. It will also show that policymakers viewed the Chesapeake Bay as an industrial space and how that construction refracted responses to the oyster dredgers’ health problems

    The WKB approximation for a linear potential and ceiling

    The physical problem this thesis deals with is a quantum system with a linear potential driving a particle away from a ceiling (impenetrable barrier). This thesis will construct the WKB approximation of the quantum mechanical propagator. The approximation will be applied to propagators corresponding to both initial momentum data and initial position data. Although the analytic solution for the propagator exists, it is an indefinite integral of Airy functions and difficult to use in obtaining probability densities by numerical integration or other schemes considered by the author. The WKB construction is less problematic because it is representable in exact form, and integration schemes (both numerical and analytic) to obtain probability densities are straightforward to implement. Another purpose of this thesis is to be a starting point for the construction of WKB propagators with general potentials but the same type of boundary, an impenetrable barrier. Research pertaining to this thesis includes determining all classical paths and constraints for the one-dimensional linear potential with ceiling, and using these equations to construct the classical action, and hence the WKB approximation. Also, evaluation of final quantum wave functions using numerical integration to check and better understand the approximation is part of the research. The results indicate that the validity of the WKB approximation depends on the type of classical paths (i.e., the initial data of the path) used in the construction. Specifically, the presence of the ceiling may cause the semi-classical wave packets to become vanishingly small in one representation of initial classical data, while not affecting the packets in another. The conclusion drawn from this phenomenon is that the representation in which the packets are not annihilated is the correct one
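
    For reference, the standard semiclassical (Van Vleck/WKB) form of the propagator and the textbook classical action for a linear potential V(x) = Fx without the ceiling are given below; the thesis's construction adds the ceiling by including the reflected classical paths, and its conventions may differ.

    % Semiclassical propagator: sum over classical paths from x' to x in time t,
    % with nu the Morse/Maslov index of each path.
    K_{\mathrm{WKB}}(x,t;x',0)
      = \sum_{\mathrm{cl.\ paths}}
        \sqrt{\frac{1}{2\pi i\hbar}
              \left|\frac{\partial^{2} S_{\mathrm{cl}}}{\partial x\,\partial x'}\right|}\,
        \exp\!\left(\frac{i}{\hbar}\, S_{\mathrm{cl}}(x,x',t) - \frac{i\pi\nu}{2}\right)

    % Direct-path classical action for V(x) = Fx (no barrier):
    S_{\mathrm{dir}}(x,x',t)
      = \frac{m\,(x-x')^{2}}{2t} - \frac{F\,t\,(x+x')}{2} - \frac{F^{2}t^{3}}{24\,m}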

    Circulation and cross-shelf transport in the Florida Big Bend

    The Florida Big Bend region in the northeastern Gulf of Mexico contains both spawning sites and nursery habitats for a variety of economically valuable marine species. One species, the gag grouper (Mycteroperca microlepis), relies on the shelf circulation to distribute larvae from shelf-break spawning grounds to coastal sea-grass nurseries each spring. Therefore, identifying the dominant circulation features and physical mechanisms that contribute to cross-shelf transport during the springtime is a necessary step in understanding the variation of the abundance of this reef fish. The mean circulation features and onshore transport pathways are investigated using a numerical ocean model with very high horizontal resolution (800–900 m) over the period 2004–2010. The model simulation demonstrates that the mean springtime shelf circulation patterns are set primarily by flow during periods of southeastward or northwestward wind stress, and that significant cross-shelf flow is generated during southeastward winds. Lagrangian particle tracking experiments demonstrate that a primary pathway exists south of Apalachicola Bay by which particles are able to reach inshore, and that significantly more particles arrive inshore when they originate from an area adjacent to a known gag spawning aggregation site. The results provide, for the first time, a description of the pathways by which onshore transport is possible from gag spawning sites at the shelf break to sea-grass nurseries at the coast in the Florida Big Bend
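
    The Lagrangian particle-tracking idea can be sketched in a few lines (purely illustrative; the study advects particles in a high-resolution ocean model, whereas the velocity field and release sites below are hypothetical):

    import numpy as np

    def velocity(x, y, t):
        # Hypothetical stand-in for modeled shelf currents: a weak mean flow
        # plus a daily-varying cross-shelf component (m/s).
        u = -0.10 + 0.05 * np.sin(2 * np.pi * t / 86400.0)
        v = 0.05 * np.cos(2 * np.pi * t / 86400.0)
        return u, v

    def track(x0, y0, days=10, dt=3600.0):
        # Forward-Euler advection of particle positions (metres) through time.
        x, y = np.asarray(x0, float).copy(), np.asarray(y0, float).copy()
        for step in range(int(days * 86400 / dt)):
            u, v = velocity(x, y, step * dt)
            x, y = x + u * dt, y + v * dt
        return x, y

    # Release a small cluster of particles near a hypothetical shelf-break site.
    x_final, y_final = track(np.zeros(5), np.linspace(0.0, 4000.0, 5))
    print(x_final, y_final)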